Safe use


Training-Free Safe Denoisers for Safe Use of Diffusion Models

Mingyu Kim, Dongjun Kim, Amman Yusuf, Stefano Ermon, Mi Jung Park

arXiv.org Artificial Intelligence

There is growing concern over the safety of powerful diffusion models (DMs), as they are often misused to produce inappropriate, not-safe-for-work (NSFW) content, or to generate copyrighted material or data of individuals who wish to be forgotten. Many existing methods tackle these issues by relying heavily on text-based negative prompts or by extensively retraining DMs to eliminate certain features or samples. In this paper, we take a radically different approach, directly modifying the sampling trajectory by leveraging a negation set (e.g., unsafe images, copyrighted data, or datapoints that need to be excluded) to avoid specific regions of the data distribution, without retraining or fine-tuning DMs. We formally derive the relationship between the expected denoised samples that are safe and those that are not, leading to our $\textit{safe}$ denoiser, which ensures that final samples stay away from the region to be negated. Inspired by this derivation, we develop a practical algorithm that produces high-quality samples while avoiding negated regions of the data distribution in text-conditional, class-conditional, and unconditional image generation. These results hint at the great potential of our training-free safe denoiser for the safer use of DMs.
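The abstract leaves the exact derived relationship to the paper, but the mechanism it describes, steering the expected denoised sample away from a negation set without retraining, can be sketched. Below is a minimal, hypothetical illustration assuming a DDPM-style model: under the Gaussian forward kernel, the posterior mean restricted to a finite negation set has a closed form (a softmax-weighted average of the negation datapoints), and the combined estimate then extrapolates away from it. The function names, the `strength` parameter, and the combination rule are illustrative assumptions, not the paper's derived formula.

```python
import torch

def negation_denoiser(x_t, alpha_bar_t, negation_set):
    """E[x_0 | x_t, x_0 in negation set] under the DDPM forward kernel
    q(x_t | x_0) = N(sqrt(abar) * x_0, (1 - abar) * I): a softmax-weighted
    average of the negation datapoints."""
    flat_neg = negation_set.flatten(1)                           # (N, D)
    diffs = x_t.flatten()[None] - alpha_bar_t ** 0.5 * flat_neg  # (N, D)
    log_w = -(diffs ** 2).sum(1) / (2.0 * (1.0 - alpha_bar_t))   # (N,)
    w = torch.softmax(log_w, 0)
    return (w[:, None] * flat_neg).sum(0).view_as(x_t)

def safe_denoised(x0_model, x0_neg, strength=1.0):
    """Hypothetical combination rule: push the model's posterior-mean
    estimate away from the negation-set estimate."""
    return x0_model + strength * (x0_model - x0_neg)
```

At each sampling step, one would replace the model's denoised estimate with `safe_denoised(...)` before the usual DDPM/DDIM update; because only the sampling trajectory changes, no retraining or fine-tuning is involved.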


On the safe use of prior densities for Bayesian model selection

F. Llorente, L. Martino, E. Curbelo, J. Lopez-Santiago, D. Delgado

arXiv.org Machine Learning

The application of Bayesian inference for the purpose of model selection is very popular nowadays. In this framework, models are compared through their marginal likelihoods, or through their quotients, called Bayes factors. However, marginal likelihoods depend on the prior choice. For model selection, even diffuse priors can actually be very informative, unlike in the parameter estimation problem. Furthermore, when the prior is improper, the marginal likelihood of the corresponding model is undetermined. In this work, we discuss the issue of prior sensitivity of the marginal likelihood and its role in model selection. We also comment on the use of uninformative priors, which are very common choices in practice. Several practical suggestions are given, and many solutions proposed in the literature for designing objective priors for model selection are described, some of which also allow the use of improper priors. The connection between the marginal likelihood approach and the well-known information criteria is also presented. We illustrate the main issues and possible solutions with numerical examples and provide related code; one of them involves a real-world application on exoplanet detection.
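The abstract's central point, that a prior which barely matters for parameter estimation can dominate model selection, is easy to reproduce in a conjugate Gaussian model. The sketch below is not from the paper; it assumes data y_i ~ N(theta, 1) with prior theta ~ N(0, s^2), for which the s-dependent part of the marginal likelihood is N(ybar; 0, s^2 + 1/n). As s grows, the posterior for theta stabilizes while the marginal likelihood keeps shrinking, so a Bayes factor against any fixed alternative keeps changing.

```python
import numpy as np
from scipy import stats

# Model: y_i ~ N(theta, 1), prior theta ~ N(0, s^2).
# The likelihood factors through the sufficient statistic ybar, so
# log m(y) = const(y) + log N(ybar; 0, s^2 + 1/n).
rng = np.random.default_rng(0)
y = rng.normal(0.5, 1.0, size=20)
n, ybar = len(y), y.mean()

for s in [1.0, 10.0, 100.0, 1000.0]:
    log_ml_term = stats.norm.logpdf(ybar, loc=0.0, scale=np.sqrt(s**2 + 1.0 / n))
    post_var = 1.0 / (n + 1.0 / s**2)   # posterior variance of theta
    post_mean = post_var * n * ybar     # posterior mean of theta
    print(f"s={s:7.1f}  log-marginal term={log_ml_term:8.3f}  "
          f"posterior = N({post_mean:.3f}, {post_var:.4f})")
```

The printed posterior is essentially unchanged beyond s = 10, while the log marginal likelihood decreases roughly like -log s: this is why a diffuse prior is "very informative" for Bayes factors, and why the marginal likelihood is undetermined under an improper prior.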


Artificial intelligence: MEPs want to ensure a fair and safe use for consumers

#artificialintelligence

The resolution addresses several challenges arising from the rapid development of artificial intelligence (AI) and automated decision-making (ADM) technologies, with a special focus on consumer protection. Parliament welcomes the potential of ADM to deliver innovative and improved services to consumers, including new digital services such as virtual assistants and chatbots. However, the resolution adds, anyone interacting with a system that automates decision-making should be "properly informed about how it functions, about how to reach a human with decision-making powers, and about how the system's decisions can be checked and corrected". Such systems should only use high-quality and unbiased data sets and "explainable and unbiased algorithms", the resolution states, and review structures should be set up to remedy possible mistakes in automated decisions.


Artificial intelligence: EU must ensure a fair and safe use for consumers

#artificialintelligence

Parliament's Internal Market and Consumer Protection Committee approved on Thursday a resolution addressing several challenges arising from the rapid development of artificial intelligence (AI) and automated decision-making (ADM) technologies. When consumers interact with an ADM system, they should be "properly informed about how it functions, about how to reach a human with decision-making powers, and about how the system's decisions can be checked and corrected", says the committee. Those systems should only use high-quality and unbiased data sets and "explainable and unbiased algorithms" in order to boost consumer trust and acceptance, states the resolution. Review structures should be set up to remedy possible mistakes in automated decisions. It should also be possible for consumers to seek human review of, and redress for, automated decisions that are final and permanent.


AI is a national security priority -- here's how we cultivate it

#artificialintelligence

Maintaining a competitive edge in artificial intelligence (AI) over global competitors has risen swiftly over the past several years as a bipartisan national security priority. That high-level strategic guidance is now transitioning to implementation, and two important steps were taken recently in the form of new policy documents. These documents made clear that delivering results for the nation on AI is the responsibility of individual federal departments and agencies doing their part. They also rightly advance what my CSIS colleagues and I, in a previous report on AI and National Security, termed an "AI ecosystem."